178 research outputs found

    Relative contributions to vergence eye movements of two binocular cues for motion-in-depth

    When we track an object moving in depth, our eyes rotate in opposite directions. This type of "disjunctive" eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and of the consistency between the directions of vergence and stimulus movements showed that, under our conditions, IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur during the presentation of IOVD stimuli were likely not a response to stimulus motion but a phoria initiated by the absence of a disparity signal.
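The slope analysis described above can be illustrated with a minimal sketch (the traces below are synthetic, not the study's data): vergence is the difference between left- and right-eye positions, and the fitted slope of the vergence trace gives the vergence velocity.

```python
import numpy as np

# hypothetical eye-position traces (degrees) sampled at 1 kHz;
# the two eyes rotate in opposite directions while tracking in depth
t = np.arange(0.0, 0.5, 0.001)
left_eye = 0.4 * t             # one direction of rotation for the left eye
right_eye = -0.4 * t           # the opposite rotation for the right eye

vergence = left_eye - right_eye            # the disjunctive (vergence) component
slope = np.polyfit(t, vergence, 1)[0]      # vergence velocity, deg/s
print(round(slope, 3))   # → 0.8
```

A consistency check like the one in the abstract would then compare the sign of this slope against the direction of the stimulus motion on each trial.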

    Vertical Binocular Disparity is Encoded Implicitly within a Model Neuronal Population Tuned to Horizontal Disparity and Orientation

    Primary visual cortex is often viewed as a “cyclopean retina”, performing the initial encoding of binocular disparities between left and right images. Because the eyes are set apart horizontally in the head, binocular disparities are predominantly horizontal. Yet, especially in the visual periphery, a range of non-zero vertical disparities do occur and can influence perception. It has therefore been assumed that primary visual cortex must contain neurons tuned to a range of vertical disparities. Here, I show that this is not necessarily the case. Many disparity-selective neurons are most sensitive to changes in disparity orthogonal to their preferred orientation. That is, the disparity tuning surfaces, mapping their response to different two-dimensional (2D) disparities, are elongated along the cell's preferred orientation. Because of this, even if a neuron's optimal 2D disparity has zero vertical component, the neuron will still respond best to a non-zero vertical disparity when probed with a sub-optimal horizontal disparity. This property can be used to decode 2D disparity, even allowing for realistic levels of neuronal noise. Even if all V1 neurons at a particular retinotopic location are tuned to the expected vertical disparity there (for example, zero at the fovea), the brain could still decode the magnitude and sign of departures from that expected value. This provides an intriguing counter-example to the common wisdom that, in order for a neuronal population to encode a quantity, its members must be tuned to a range of values of that quantity. It demonstrates that populations of disparity-selective neurons encode much richer information than previously appreciated. It suggests a possible strategy for the brain to extract rarely-occurring stimulus values, while concentrating neuronal resources on the most commonly-occurring situations.
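The decoding argument can be sketched numerically. In this toy model (all tuning parameters and the cell roster are illustrative, not fitted to physiology), every cell's optimal vertical disparity is zero, but its Gaussian tuning surface is elongated along its preferred orientation; a brute-force maximum-likelihood read-out over the population nevertheless recovers a stimulus with non-zero vertical disparity:

```python
import numpy as np

def response(d, theta, pref_h, sigma_par=0.5, sigma_perp=0.1):
    """Gaussian 2-D disparity tuning: the optimum lies at a purely
    horizontal disparity (pref_h, 0), but the tuning surface is
    elongated along the cell's preferred orientation theta."""
    dx, dy = d[0] - pref_h, d[1]      # every cell prefers zero vertical disparity
    c, s = np.cos(theta), np.sin(theta)
    par = c * dx + s * dy             # disparity component along the orientation
    perp = -s * dx + c * dy           # component orthogonal to it
    return np.exp(-(par / sigma_par) ** 2 - (perp / sigma_perp) ** 2)

# a small population: 8 orientations x 3 preferred horizontal disparities
cells = [(t, p) for t in np.linspace(0, np.pi, 8, endpoint=False)
                for p in (-0.1, 0.0, 0.1)]
stim = (0.05, 0.08)                   # stimulus with non-zero vertical disparity
pop = np.array([response(stim, t, p) for t, p in cells])

# brute-force maximum-likelihood decode over candidate 2-D disparities
grid = np.linspace(-0.2, 0.2, 41)
best = min(((dx, dy) for dx in grid for dy in grid),
           key=lambda d: np.sum((np.array([response(d, t, p)
                                           for t, p in cells]) - pop) ** 2))
print(best)   # recovers both components, vertical included
```

The point mirrors the abstract: no cell is tuned to a non-zero vertical disparity, yet the population response pattern disambiguates the vertical component of the stimulus.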

    Spatial Stereoresolution for Depth Corrugations May Be Set in Primary Visual Cortex

    Stereo “3D” depth perception requires the visual system to extract binocular disparities between the two eyes' images. Several current models of this process, based on the known physiology of primary visual cortex (V1), do this by computing a piecewise-frontoparallel local cross-correlation between the left and right eye's images. The size of the “window” within which detectors examine the local cross-correlation corresponds to the receptive field size of V1 neurons. This basic model has successfully captured many aspects of human depth perception. In particular, it accounts for the low human stereoresolution for sinusoidal depth corrugations, suggesting that the limit on stereoresolution may be set in primary visual cortex. An important feature of the model, reflecting a key property of V1 neurons, is that the initial disparity encoding is performed by detectors tuned to locally uniform patches of disparity. Such detectors respond better to square-wave depth corrugations, since these are locally flat, than to sinusoidal corrugations, which are slanted almost everywhere. Consequently, for any given window size, current models predict better performance for square-wave disparity corrugations than for sine-wave corrugations at high amplitudes. We have recently shown that this prediction is not borne out: humans perform no better with square-wave than with sine-wave corrugations, even at high amplitudes. The failure of this prediction raised the question of whether stereoresolution may actually be set at later stages of cortical processing, perhaps involving neurons tuned to disparity slant or curvature. Here we extend the local cross-correlation model to include existing physiological and psychophysical evidence indicating that larger disparities are detected by neurons with larger receptive fields (a size/disparity correlation). We show that this simple modification succeeds in reconciling the model with human results, confirming that stereoresolution for disparity gratings may indeed be limited by the size of receptive fields in primary visual cortex.
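The core computation these models share, a windowed cross-correlation between the eyes' images whose best-matching shift is taken as the local disparity, can be sketched as follows (1-D signals; the window and disparity-range parameters are illustrative):

```python
import numpy as np

def correlation_disparity(left, right, x, window=21, max_disp=8):
    """Estimate disparity at position x by normalised cross-correlation
    of a local left-eye window against shifted right-eye windows: the
    piecewise-frontoparallel detector assumed by the model."""
    patch = left[x : x + window]
    scores = []
    for d in range(-max_disp, max_disp + 1):
        other = right[x + d : x + d + window]
        num = np.dot(patch - patch.mean(), other - other.mean())
        scores.append(num / (window * patch.std() * other.std() + 1e-12))
    return np.argmax(scores) - max_disp   # shift with the highest correlation

# a random-texture pair with a uniform disparity of 3 samples
rng = np.random.default_rng(0)
right = rng.standard_normal(200)
left = np.roll(right, -3)                 # left image is a shifted copy
print(correlation_disparity(left, right, x=50))   # → 3
```

In model terms, `window` plays the role of the V1 receptive field size, and the size/disparity correlation proposed in the abstract corresponds to letting detectors with larger `window` handle larger `max_disp`.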

    The Effect of Interocular Phase Difference on Perceived Contrast

    Binocular vision is traditionally treated as two processes: the fusion of similar images, and the interocular suppression of dissimilar images (e.g. binocular rivalry). Recent work has demonstrated that interocular suppression is phase-insensitive, whereas binocular summation occurs only when stimuli are in phase. But how do these processes affect our perception of binocular contrast? We measured perceived contrast using a matching paradigm for a wide range of interocular phase offsets (0–180°) and matching contrasts (2–32%). Our results revealed a complex interaction between contrast and interocular phase. At low contrasts, perceived contrast decreased monotonically with increasing phase offset, by up to a factor of 1.6. At higher contrasts the pattern was non-monotonic: perceived contrast was veridical for in-phase and antiphase conditions, and for monocular presentation, but increased slightly at intermediate phase angles. These findings challenge a recent model in which contrast perception is phase-invariant. The results were predicted by a binocular contrast gain control model involving monocular gain controls with interocular suppression from positive and negative phase channels, followed by summation across eyes and then across space. Importantly, this model, applied here to conditions with vertical disparity, has only a single (zero) disparity channel and embodies both fusion and suppression processes within a single framework.
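A minimal sketch of a two-stage contrast gain control of the general form described above (monocular divisive suppression by both eyes' inputs, then binocular summation through a second stage). The exponents and constants here are illustrative, not the paper's fitted values, and the positive/negative phase channels are omitted:

```python
def binocular_response(c_left, c_right, m=1.3, p=1.3, s=1.0, z=0.1):
    """Two-stage gain control: each eye's contrast is divisively
    suppressed by the summed input from both eyes, then the monocular
    outputs are summed and passed through a second gain-control stage.
    (Illustrative constants, not fitted values.)"""
    denom = s + c_left + c_right           # interocular suppression
    binoc = c_left ** m / denom + c_right ** m / denom
    return binoc ** p / (z + binoc)        # second (binocular) stage

# binocular summation: two in-phase eyes beat one eye at the same contrast
mono = binocular_response(8.0, 0.0)
binoc = binocular_response(8.0, 8.0)
print(binoc > mono)   # → True
```

Perceived-contrast matches of the kind measured in the paper correspond to finding the monocular contrast whose response equals the binocular one in a model of this form.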

    First- and second-order contributions to depth perception in anti-correlated random dot stereograms

    The binocular energy model of neural responses predicts that depth from binocular disparity might be perceived in the reversed direction when the contrast of dots presented to one eye is reversed. While reversed depth has been found using anti-correlated random-dot stereograms (ACRDS), the findings are inconsistent across studies. The mixed findings may be accounted for by the presence of a gap between the target and surround, or by the overlap of dots around the vertical edges of the stimuli. To test this, we assessed whether (1) the gap size (0, 19.2 or 38.4 arc min), (2) the correlation of dots, or (3) the border orientation (circular target, or horizontal or vertical edge) affected the perception of depth. Reversed depth from ACRDS (circular no-gap condition) was seen by a minority of participants, but this effect reduced as the gap size increased. Depth was mostly perceived in the correct direction for ACRDS edge stimuli, with the effect increasing with the gap size. The inconsistency across conditions can be accounted for by the relative reliability of first- and second-order depth-detection mechanisms and the coarse spatial resolution of the latter.
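The stimulus manipulation is simple to state in code: an anti-correlated stereogram is an ordinary random-dot stereogram in which the dot contrasts in one eye are inverted (the image size and disparity below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
DISPARITY = 4                                  # pixels, illustrative

left = rng.choice([-1.0, 1.0], size=(64, 64))  # bright (+1) / dark (-1) dots
right = np.roll(left, -DISPARITY, axis=1)      # horizontal shift = binocular disparity
acrds_right = -right                           # contrast reversed in one eye only

# interocular correlation at the matching shift: +1 normally, -1 when anti-correlated
match = np.roll(left, -DISPARITY, axis=1)
print(np.mean(match * right), np.mean(match * acrds_right))   # → 1.0 -1.0
```

It is this sign flip in the interocular correlation that leads the binocular energy model to predict depth in the reversed direction.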

    Modelling human visual navigation using multi-view scene reconstruction

    It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
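The landmark-location stage rests on standard photogrammetric triangulation. A minimal 2-D sketch (noise-free rays and a hypothetical scene geometry) of estimating a landmark as the least-squares intersection of two viewing rays:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two viewing rays, each given by a
    camera centre c and a unit direction d: the standard two-view
    landmark estimate."""
    A, b = np.zeros((2, 2)), np.zeros(2)
    for c, d in ((c1, d1), (c2, d2)):
        P = np.eye(2) - np.outer(d, d)   # projector orthogonal to the ray
        A += P                           # accumulate normal equations:
        b += P @ c                       # minimise summed distance to both rays
    return np.linalg.solve(A, b)

# a landmark at (2, 5) seen from two vantage points
landmark = np.array([2.0, 5.0])
c1, c2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
d1 = (landmark - c1) / np.linalg.norm(landmark - c1)
d2 = (landmark - c2) / np.linalg.norm(landmark - c2)
print(triangulate(c1, d1, c2, d2))   # ≈ [2, 5]
```

With noisy directions, repeating this estimate over sampled ray perturbations yields a spatial likelihood map of the kind the models compare against navigation errors.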

    Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects

    Our understanding of the neural underpinnings of perception is largely built upon studies employing 2-dimensional (2D) planar images. Here we used slow event-related functional imaging in humans to examine whether neural populations show a characteristic repetition-related change in haemodynamic response for real-world 3-dimensional (3D) objects, an effect commonly observed using 2D images. As expected, trials involving 2D pictures of objects produced robust repetition effects within classic object-selective cortical regions along the ventral and dorsal visual processing streams. Surprisingly, however, repetition effects were weak, if not absent, on trials involving the 3D objects. These results suggest that the neural mechanisms involved in processing real objects may be distinct from those that arise when we encounter a 2D representation of the same items. These preliminary results suggest the need for further research with ecologically valid stimuli in other imaging designs to broaden our understanding of the neural mechanisms underlying human vision.

    Predictors of adherence to a multifaceted podiatry intervention for the prevention of falls in older people

    Background: Despite emerging evidence that foot problems and inappropriate footwear increase the risk of falls, there is little evidence as to whether foot-related intervention strategies can be successfully implemented. The aim of this study was to evaluate adherence rates, barriers to adherence, and the predictors of adherence to a multifaceted podiatry intervention for the prevention of falls in older people. Methods: The intervention group (n = 153, mean age 74.2 years) of a randomised trial that investigated the effectiveness of a multifaceted podiatry intervention to prevent falls was assessed for adherence to the three components of the intervention: (i) foot orthoses, (ii) footwear advice and footwear cost subsidy, and (iii) a home-based foot and ankle exercise program. Adherence to each component and the barriers to adherence were documented, and separate discriminant function analyses were undertaken to identify factors that were significantly and independently associated with adherence to the three intervention components. Results: Adherence to the three components of the intervention was as follows: foot orthoses (69%), footwear (54%) and home-based exercise (72%). Discriminant function analyses identified that being younger was the best predictor of orthoses use, higher physical health status and lower fear of falling were independent predictors of footwear adherence, and higher physical health status was the best predictor of exercise adherence. The predictive accuracy of these models was only modest, with 62 to 71% of participants correctly classified. Conclusions: Adherence to a multifaceted podiatry intervention in this trial ranged from 54 to 72%. People with better physical health, less fear of falling and a younger age exhibited greater adherence, suggesting that strategies need to be developed to enhance adherence in frailer older people, who are most at risk of falling. Trial registration: Australian New Zealand Clinical Trials Registry ACTRN12608000065392 (http://www.anzctr.org.au/ACTRN12608000065392.aspx).
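The discriminant function analysis used to predict adherence can be sketched with Fisher's linear discriminant on synthetic data (the predictors, group means and spreads below are invented for illustration, not the trial's data):

```python
import numpy as np

rng = np.random.default_rng(2)
# invented predictors (age in years, physical-health score) for two groups
adherent = rng.normal(loc=[72.0, 55.0], scale=[5.0, 8.0], size=(100, 2))
non_adherent = rng.normal(loc=[78.0, 48.0], scale=[5.0, 8.0], size=(100, 2))

# Fisher's linear discriminant: w is proportional to Sw^-1 (m1 - m2)
m1, m2 = adherent.mean(axis=0), non_adherent.mean(axis=0)
s_within = np.cov(adherent.T) + np.cov(non_adherent.T)
w = np.linalg.solve(s_within, m1 - m2)
threshold = w @ (m1 + m2) / 2              # midpoint between projected means

correct = np.concatenate([adherent @ w > threshold,
                          non_adherent @ w <= threshold])
print(correct.mean())   # proportion correctly classified
```

The "predictive accuracy" reported in the abstract is this proportion of participants falling on the correct side of the discriminant boundary.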

    Properties of V1 Neurons Tuned to Conjunctions of Visual Features: Application of the V1 Saliency Hypothesis to Visual Search Behavior

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis: the bottom-up saliency of any visual location is represented by the highest V1 response to it, relative to the background responses. The neural properties probed are those of the less-studied V1 neurons tuned simultaneously, or conjunctively, in two feature dimensions. The visual search task is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature-singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant-feature target (e.g., a CO target) relative to that predicted by a race between the RTs for the two corresponding single-feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned and MO-tuned cells are often more active than the single-feature-tuned cells in response to the redundant-feature targets; this occurs more frequently for the MO-tuned cells, such that they are no less likely than the M-tuned or O-tuned neurons to be the most responsive neuron dictating saliency for an MO target.
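The race-model benchmark underlying these predictions can be sketched with a Monte Carlo simulation (the RT distributions below are invented for illustration): the redundant-target prediction is the trial-by-trial minimum of two independent single-feature races, and a redundancy gain beyond this benchmark is what the conjunctively tuned cells are invoked to explain.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# invented single-feature RT distributions (ms)
rt_color = rng.normal(550.0, 80.0, n)    # colour-singleton target (C)
rt_orient = rng.normal(580.0, 90.0, n)   # orientation-singleton target (O)

# independent-race prediction for the redundant CO target:
# on each trial, whichever feature cue finishes first determines the RT
rt_race = np.minimum(rt_color, rt_orient)
print(rt_color.mean(), rt_orient.mean(), rt_race.mean())
```

The race prediction is faster than either single-feature mean on its own; a measured CO reaction time reliably shorter still would indicate a contribution beyond the race, attributed in the paper to CO-tuned conjunctive cells.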